30 research outputs found

    Sinking Ships

    Social Influence in HRI with Application to Social Robots for Rehabilitation

    Expert-informed design and automation of persuasive, socially assistive robots

    Socially assistive robots primarily provide useful functionality through their social interactions with user(s). An example application, used to ground the work throughout this thesis, is using a social robot to guide users through exercise sessions. Initial studies have demonstrated that interactions with a social robot can improve engagement with exercise, and that an embodied social robot is more effective for this than the equivalent virtual avatar. However, many questions remain regarding the design and automation of socially assistive robot behaviours for this purpose. This thesis identifies and practically works through a number of these questions in pursuit of one ultimate goal: the meaningful, real-world deployment of a fully autonomous, socially assistive robot. The work takes an expert-informed approach, looking to learn from human experts in socially assistive interactions and to explore how their expert knowledge can be reflected in the design and automation of social robot behaviours. Taking this approach leads to the notion that socially assistive robots need to be persuasive in order to be effective, but it also highlights the difficulty of automating such complex, socially intelligent behaviour. The ethical implications of designing persuasive robot behaviours are also practically considered, with reference to a published standard on ethical robot design. The work culminates in the use of a state-of-the-art interactive machine learning approach, in which an expert fitness instructor trains a robot ‘fitness coach’, deployed in a university gym, as it guides participants through an NHS exercise programme. After a total of 151 training sessions across 10 participants, the robot successfully ran 32 sessions autonomously. The results demonstrated that the robot's autonomous behaviour was generally comparable to its behaviour when controlled/supervised by the fitness instructor, and that, overall, the robot played an important role in keeping participants motivated through the exercise programme.
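
    As a purely illustrative sketch (not taken from the thesis), the snippet below shows one way an interactive machine learning loop of the kind described above could be structured: the expert's action choices during supervised sessions are logged as state-action pairs, a simple classifier is fitted to them, and the resulting policy then selects the robot's behaviours autonomously. All names here (SessionState, CoachPolicy, the feature set, and the stand-in expert rule) are hypothetical.

```python
# Illustrative sketch only: a supervised-to-autonomous action-selection loop
# loosely in the spirit of interactive machine learning for a robot exercise
# coach. All class/function names and features are hypothetical.

from dataclasses import dataclass
import random

from sklearn.tree import DecisionTreeClassifier

ACTIONS = ["encourage", "correct_form", "rest_prompt", "next_exercise"]

@dataclass
class SessionState:
    reps_completed: int
    heart_rate: int
    perceived_effort: int  # e.g. Borg scale, 6-20

    def features(self):
        return [self.reps_completed, self.heart_rate, self.perceived_effort]

class CoachPolicy:
    def __init__(self):
        self.model = DecisionTreeClassifier()
        self.X, self.y = [], []

    def record(self, state, expert_action):
        """Log the expert's chosen action for this state (supervised phase)."""
        self.X.append(state.features())
        self.y.append(expert_action)

    def fit(self):
        self.model.fit(self.X, self.y)

    def act(self, state):
        """Select an action autonomously from the learned policy."""
        return self.model.predict([state.features()])[0]

def pretend_expert(state):
    # Stand-in for the human instructor's decisions during training sessions.
    if state.heart_rate > 150:
        return "rest_prompt"
    if state.perceived_effort > 15:
        return "encourage"
    return "next_exercise" if state.reps_completed >= 10 else "correct_form"

# --- Supervised "training sessions": the expert picks every action ---
policy = CoachPolicy()
for _ in range(200):
    s = SessionState(random.randint(0, 12), random.randint(90, 170),
                     random.randint(6, 20))
    policy.record(s, pretend_expert(s))
policy.fit()

# --- Autonomous session: the robot now selects its own actions ---
s = SessionState(reps_completed=11, heart_rate=155, perceived_effort=14)
print(policy.act(s))  # e.g. "rest_prompt"
```

    In the thesis's actual system the state representation and training process are presumably far richer, with the instructor correcting the robot online; this sketch only conveys the supervised-to-autonomous shape of the approach.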

    Robots Need the Ability to Navigate Abusive Interactions

    Researchers are seeing a growing number of cases of abusive disinhibition towards robots in public realms. Because robots embody gendered identities, poor navigation of antisocial dynamics may reinforce or exacerbate gender-based violence. It is essential that robots deployed in social settings be able to recognize and respond to abuse in a way that minimises ethical risk. Enabling this capability requires designers to first understand the risk posed by abuse of robots, and hence how humans perceive robot-directed abuse. To that end, we experimentally investigated reactions to a physically abusive interaction between a human perpetrator and a victimized agent. Given extensions of gendered biases to robotic agents, as well as associations between an agent’s human likeness and the experiential capacity attributed to it, we quasi-manipulated the victim’s humanness (via use of a human actor vs. NAO robot) and gendering (via inclusion of stereotypically masculine vs. feminine cues in their presentation) across four video-recorded reproductions of the interaction. Analysis of data from 417 participants, each of whom watched one of the four videos, indicates that the intensity of emotional distress felt by an observer is associated with their gender identification, previous experience with victimization, hostile sexism, and support for social stratification, as well as with the victim’s gendering.
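
    For illustration only (this is not the authors' model or data), the sketch below fits a simple linear model of observer distress on synthetic stand-ins for the predictors mentioned above; all variable names, scales, and effect sizes are invented.

```python
# Illustrative only: synthetic data and a simple linear model of the kind of
# association analysis described above. Variables and effects are invented;
# they do not reproduce the study's data or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 417  # same sample size as reported, but the data here are synthetic

df = pd.DataFrame({
    "gender_identification": rng.integers(1, 8, n),  # 1-7 scale
    "prior_victimization":   rng.integers(0, 2, n),  # 0 = no, 1 = yes
    "hostile_sexism":        rng.uniform(1, 7, n),
    "social_dominance":      rng.uniform(1, 7, n),   # support for stratification
    "victim_feminine_cues":  rng.integers(0, 2, n),  # gendering manipulation
})
# Synthetic outcome: distress loosely tied to the predictors plus noise.
df["distress"] = (
    0.3 * df.gender_identification
    + 0.8 * df.prior_victimization
    - 0.2 * df.hostile_sexism
    - 0.2 * df.social_dominance
    + 0.5 * df.victim_feminine_cues
    + rng.normal(0, 1, n)
)

model = smf.ols(
    "distress ~ gender_identification + prior_victimization"
    " + hostile_sexism + social_dominance + victim_feminine_cues",
    data=df,
).fit()
print(model.summary())
```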

    Enactive artificial intelligence: subverting gender norms in human-robot interaction

    Introduction: This paper presents Enactive Artificial Intelligence (eAI) as a gender-inclusive approach to AI, emphasizing the need to address social marginalization resulting from unrepresentative AI design. Methods: The study employs a multidisciplinary framework to explore the intersectionality of gender and technoscience, focusing on the subversion of gender norms within Robot-Human Interaction in AI. Results: The results reveal the development of four ethical vectors, namely explainability, fairness, transparency, and auditability, as essential components for adopting an inclusive stance and promoting gender-inclusive AI. Discussion: By considering these vectors, we can ensure that AI aligns with societal values, promotes equity and justice, and facilitates the creation of a more just and equitable society.

    A case study in designing trustworthy interactions: implications for socially assistive robotics

    This work is a case study in applying recent, high-level ethical guidelines, specifically concerning transparency and anthropomorphisation, to Human-Robot Interaction (HRI) design practice for a real-world Socially Assistive Robot (SAR) application. We utilize an online study to investigate how the perception and efficacy of SARs might be influenced by this design practice, examining how robot utterances and display manipulations influence perceptions of the robot and the medical recommendations it gives. Our results suggest that applying transparency policies can improve the SAR's effectiveness without harming its perceived anthropomorphism. However, our objective measures suggest that participant understanding of the robot's decision-making process remained low across conditions. Furthermore, verbal anthropomorphisation does not seem to affect the perception or efficacy of the robot.

    Effective Persuasion Strategies for Socially Assistive Robots
